Search Results: "terceiro"

2 September 2014

Antonio Terceiro: DebConf 14: Community, Debian CI, Ruby, Redmine, and Noosfero

This time, for personal reasons I wasn't able to attend the full DebConf, which started on Saturday, August 22nd. I arrived in Portland by noon on Tuesday the 26th, on the 4th day of the conference. Even though I would have liked to arrive earlier, the loss was alleviated by the work of the amazing DebConf video team: I was able to remotely follow most of the sessions I would have liked to attend if I were already there.

As I tell everyone, DebConf is for sure the best conference I have ever attended. The technical and philosophical discussions that take place in talks, BoF sessions or even unplanned ad-hoc gatherings are deep. The hacking moments, where you have a chance to pair with fellow developers with whom you usually only have contact remotely via IRC or email, are precious. That is all great. But definitely, catching up with old friends, and making new ones, is what makes DebConf so special. Your old friends are your old friends, and meeting them again after so much time is always a pleasure. New friendships will already start with a powerful bond: being part of the Debian community.

Being only 4 hours behind my home time zone, jet lag wasn't a big problem during the day. However, I was waking up too early in the morning and consequently getting tired very early at night, so I mostly didn't go out after the hacklabs were closed at 10PM.

Despite all of the discussion, being in the audience for several talks, other social interactions and whatnot, during this DebConf I managed to get quite a lot of useful work done.

debci and the Debian Continuous Integration project

I gave a talk where I discussed the past, present, and future of debci and the Debian Continuous Integration project. The slides are available, as well as the video recording.
One thing I want you to take away is that there is a difference between debci and the Debian Continuous Integration project:

A few days before DebConf, Cédric Boutillier managed to extract gem2deb-test-runner from gem2deb, so that autopkgtest tests can be run against any Ruby package that has tests by running gem2deb-test-runner --autopkgtest. gem2deb-test-runner will do the right thing, making sure that the tests don't use code from the source tree but instead run against the installed package. Then, right after my talk I was glad to discover that the Perl team is also working on a similar tool that will automate running tests for their packages against the installed package. We agreed that they will send me a whitelist of packages for which we could just call that tool and have it do The Right Thing. We might be talking here about getting autopkgtest support (and consequently continuous integration) for free for almost 4000 packages. The missing bits for this to happen are:

Over a few days I mentored Lucas Kanashiro, who also attended DebConf, on writing a patch to add support for email notifications in debci, so maintainers can be proactively notified of status changes (pass/fail, fail/pass) in their packages. I have also started hacking on the support for distributed workers, based on the initial work by Martin Pitt:

Ruby

I had some discussion with Christian about making RubyGems install to $HOME by default when the user is not root. We discussed a few implementation options, and while I don't have a solution yet, we have a better understanding of the potential pitfalls. The Ruby BoF session on Friday produced a few interesting discussions. Some takeaway points include, but are not limited to:

Redmine

I was able to make Redmine work with the Rails 4 stack we currently have in unstable/testing. This required using a snapshot of the still unreleased version 3.0, based on the rails-4.1 branch in the upstream Subversion repository, as source.
I am a little nervous about using an upstream snapshot, though. According to the project's roadmap (http://www.redmine.org/projects/redmine/roadmap), the only purpose of the 3.0 release will be to upgrade to Rails 4, but before that happens there should be a 2.6.0 release, which has not happened yet. 3.0 should be equivalent to that 2.6.0 version both feature-wise and, especially, bug-wise. The only problem is that we don't know what that 2.6.0 looks like yet. According to the roadmap it seems there is not much left in terms of features for 2.6.0, though. The updated package is not in unstable yet, but will be soon. It needs more testing, and a good update to the documentation. If you are interested in helping test Redmine on jessie before the freeze, please get in touch with me.

Noosfero

I gave a lightning talk on Noosfero, a platform for social networking websites that I am upstream for. It is a Rails application licensed under the AGPLv3, and there are packages for wheezy. You can check out the slides I used. The video recording is not available yet, but should be soon. That's it. I am looking forward to DebConf 15 at Heidelberg. :-)
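For Ruby packages, the gem2deb-test-runner flow mentioned above boils down to a one-line DEP-8 test. A sketch of what a package's debian/tests/control might look like (the field values here are illustrative assumptions, not taken from any real package):

```
Test-Command: gem2deb-test-runner --autopkgtest
Depends: @, gem2deb-test-runner
```

The `@` in Depends expands to the binary packages built from the source package, so the tests run against the installed package rather than the source tree.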

11 June 2014

Bits from Debian: Introducing the Debian Continuous Integration project

Debian is a big system. At the time of writing, the unstable distribution has more than 20,000 source packages, building more than 40,000 binary packages on the amd64 architecture. The number of inter-dependencies between binary packages is mind-boggling: the entire dependency graph for the amd64 architecture contains a little more than 375,000 edges. If you want to expand the phrase "package A depends on package B", there are more than 375,000 pairs of packages A and B that can be used. Every one of these dependencies is a potential source of problems. A library changes the semantics of a function call, and then programs using that library that assumed the previous semantics can start to malfunction. A new version of your favorite programming language comes out, and a program written in it no longer works. The number of ways in which things can go wrong goes on and on. With an ecosystem as big as Debian, it is just impossible to stop these problems from happening. What we can do is try to detect them when they happen, and fix them as soon as possible. The Debian Continuous Integration project was created to address exactly this problem. It will continuously run test suites for source packages when any of their dependencies is updated, as well as when a new version of the package itself is uploaded to the unstable distribution. If any problems that can be detected by running an automated test suite arise, package maintainers can be notified in a matter of hours. Antonio Terceiro has posted on his blog an introduction to the project with a more detailed description of the project, its evolution since January 2014 when it was first introduced, an explanation of how the system works, and how maintainers can enable test suites for their packages. You might also want to check the documentation directly.

1 June 2014

Antonio Terceiro: An introduction to the Debian Continuous Integration project

Debian is a big system. At the time of writing, looking at my local package list caches tells me that the unstable suite contains 21306 source packages, and 42867 binary packages on amd64. Among these 42867 binary packages, there is an unthinkable number of inter-package dependencies. For example, the dependency graph of the ruby package contains 20-something other packages. A new version of any of these packages can potentially break some functionality in the ruby package. And that dependency graph is very small. Looking at the dependency graph for, say, the rails package will make your eyes bleed. I tried it here, and GraphViz needed a PNG image of 7653×10003 pixels to draw it. It ain't pretty. Installing rails on a clean Debian system will pull in another 109 packages as part of the dependency chain. Again, as new versions of those packages are uploaded to the archive, there is a probability that a backwards-incompatible change, or even a bug fix which was being worked around, might make some functionality in rails stop working. Even if that probability is low for each package in the dependency chain, with enough packages the probability of any of them causing problems for rails is quite high. And still the rails dependency chain is not that big: libreoffice will pull in another 264 packages, gnome will pull in 1311 dependencies, and kde-full 1320 (!). With a system this big, problems will happen, and that's a fact of life. As developers, what we can do is try to spot these problems as early as possible, and fix them in time to make a solid release with the high quality Debian is known for. While automated testing is not the proverbial Silver Bullet of Software Engineering, it is an effective way of finding regressions. Back in 2006, Ian Jackson started the development of autopkgtest as a tool to test Debian packages in their installed form (as opposed to testing packages using their source tree).
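As an aside, the probability claim above is easy to sanity-check with a back-of-the-envelope calculation. Assuming, purely for illustration, that each package in the 109-package rails dependency chain independently has a 1% chance of introducing a breaking change in a given period:

```shell
# P(at least one of n dependencies breaks) = 1 - (1 - p)^n
# With an assumed p = 1% and rails' 109-package dependency chain:
awk 'BEGIN { p = 0.01; n = 109; printf "%.0f%%\n", (1 - (1 - p)^n) * 100 }'
# prints 67%
```

Even with a deliberately low per-package probability, two thirds of the time something in the chain breaks.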
In 2011, the autopkgtest test suite format was proposed as a standard for the Debian project, in what we now know as the DEP-8 specification. Since then, some maintainers, myself included, started experimenting with DEP-8 tests in their packages. There was an expectation in the air that someday, someone would run those tests for the entire archive, and that would be a precious source of QA information. During the holiday break last year, I decided to give it a shot. I initially called the codebase dep8. Later I renamed it to debci, since it could potentially also run other types of test suites in the future. Since early January, ci.debian.net has been running an instance of debci for the Debian Project. Debian Continuous Integration will trigger tests at most 4 times a day, 3 hours after each dinstall run. It will update a local APT cache and look for packages that declare a DEP-8 test suite. Each package with a test suite will then have its test suite executed if there was any change in its dependency chain since the last test run. Test results are published at ci.debian.net every hour, and at the end of each batch a global status is updated. Maintainers can subscribe to a per-package Atom feed to keep up with their package's test results. People interested in the overall status can subscribe to a global Atom feed of events. Since the introduction of Debian CI in mid-January 2014, we have seen an amazing increase in the number of packages with test suites. We had a little less than 200 packages with test suites back then, against around 350 now (early June 2014). The ratio of packages passing their test suites has also improved a lot, going from less than 50% to more than 75%. There is documentation available, including a FAQ for package maintainers, with further information about how the system works, how to declare test suites in their packages, and how to reproduce test runs locally.
Also available is development information about debci itself, for those inclined to help improve the system. This is just the beginning. debci is under a good rate of development, and you can expect to see a constant flux of improvements. In particular, I would like to mention a few people who have been making amazing contributions to the project:
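To give a feel for what "declaring a DEP-8 test suite" involves, here is a minimal sketch. The field and file layout follow the DEP-8 specification; the test name and command are made-up examples:

```
# debian/control (source stanza), announcing the test suite:
XS-Testsuite: autopkgtest

# debian/tests/control, describing the tests:
Tests: smoke-test
Depends: @
```

With this in place, debci would pick up the package and run debian/tests/smoke-test against the installed binary packages whenever its dependency chain changes.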

12 February 2014

Christoph Berg: ci.debian.net on DDPO

More and more packages are getting autopkgtest (aka DEP-8) test suites these days. Thanks to Antonio Terceiro, there is ci.debian.net running the tests. Last weekend, I've added a "CI" column on DDPO that shows the current test results for your packages. Enjoy, and add tests to your packages!

24 February 2013

Antonio Terceiro: Ruby 2.0 released, with multiarch support

Ruby 2.0 was released today. This new version of the language brings some very interesting features, and according to the core team, an effort has been made to keep source-level compatibility with Ruby 1.9. Debian packaging is under way and should hit NEW soon. During the last few days I gave more attention to getting the new multiarch support fixed upstream than to the packaging bits, but the remaining packaging work should be pretty much housekeeping. Next steps from a Debian point of view (after Wheezy is out) include:

Now let's get back to fixing RC bugs and getting Wheezy released. :-)

21 November 2012

Antonio Terceiro: Today I am turning 20 ...

in hexadecimal. I have been waiting for a long time to be able to make this joke, and I will still be able to do it for the next 9 years! :-)

28 June 2012

Antonio Terceiro: Wheezy is Coming

The original image (in wallpaper resolution) was made^Whacked by my friend Aurélio.

20 May 2012

Antonio Terceiro: How to make a grad student freak out

So that grad student has just submitted his thesis and you are invited to review it. With these quick tips, you can test the limits of the student's willpower, and make his degree a little more deserved by putting him through some constructive mind games.

29 February 2012

Antonio Terceiro: Thesis submitted

Last Friday, after 5 long years, I finally submitted my PhD thesis. It was quite a relief, more or less as if an elephant had been taken off my back. An English title for my thesis would be Structural Complexity Characterization in Software Systems. Here is an abstract:
This thesis proposes a theory to characterize structural complexity in software systems. This theory aims to identify (i) the contribution of several factors to the structural complexity variation and (ii) the effects of structural complexity in software projects. Possible factors in the structural complexity variation include: human factors, such as general experience of the developers and their familiarity with the different parts of the system; factors related to the changes performed on the system, such as size variation and change diffusion; and organizational factors, such as the maturity of the software development process and the communication structure of the project. Effects of structural complexity include higher effort, and consequently higher cost, in software comprehension and maintenance activities. To test the validity of the proposed theory, four empirical studies were performed, mining data from free software project repositories. We analyzed historical data from changes performed in 13 systems from different application domains and written in different programming languages. The results of these studies indicated that all the factors studied influenced the structural complexity variation significantly in at least one of the projects, but different projects were influenced by different sets of factors. The models obtained were capable of describing up to 93% of the structural complexity variation in the projects analyzed. Keywords: Structural Complexity, Software Maintenance, Human Factors in Software Engineering, Mining Software Repositories, Theories in Software Engineering, Empirical Software Engineering, Free/Open Source Software Projects.
Those who read Portuguese can check out the actual thesis text as a PDF file. Most of the studies discussed in the thesis are presented in English in papers I have published over the last few years. My defense is going to be on March 23rd. If you happen to be in Salvador on that day, please feel cordially invited.

28 January 2012

Antonio Terceiro: A visual cheat sheet for ANSI color codes

Now and then I want to output some ANSI color escape codes from software I write, and I always end up doing some trial and error to figure out the exact codes I want. Sometimes it's overkill to add a dependency on an existing library that already deals with it, or the language I am using does not have one. There are a lot of listings of the ANSI color codes out there, but I couldn't find one that matches the actual codes with the resulting effect in a visual way. Even the Wikipedia article has a colored table with the actual colors, but I have to look up manually which code combination produces which color. So I spent a few minutes writing a shell script that prints all useful combinations, formatted with themselves. This way I can quickly figure out which exact code I want to achieve the desired effect. The code for now is very simple:

#!/bin/sh -e
for attr in $(seq 0 1); do
  for fg in $(seq 30 37); do
    for bg in $(seq 40 47); do
      printf "\033[$attr;${bg};${fg}m$attr;$fg;$bg\033[m "
    done
    echo
  done
done

Is there a package in Debian that already does that? Would people find it useful to have this packaged? update: it turns out you can find some similar stuff on Google Images. It was a quick and fun hack, though. update 2: Replacing echo -n with printf makes the script work regardless of whether /bin/sh is bash or dash. Thanks to cocci for pointing that out.
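As an aside, the same trick extends beyond the 16 basic colors: terminals that support the xterm 256-color palette accept \033[38;5;Nm to set the foreground to palette entry N. A minimal sketch in the same spirit as the script above (this is an addition of mine, not part of the original script):

```shell
#!/bin/sh -e
# Print the first 16 entries of the 256-color palette, each number
# formatted with its own color, so the code for each color is visible
# at a glance.
for n in $(seq 0 15); do
  printf "\033[38;5;%dm%3d\033[m " "$n" "$n"
done
echo
```

Extending the loop to 255 (and using 48;5;N for backgrounds) gives the full palette.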

9 January 2012

Antonio Terceiro: Life after exec()

From the not necessarily big news, but still useful department. The situation: for Very Good Reasons 1, you want to replace your current process by calling exec(), but you still want to have the chance to do something after the process you exec()ed finishes. This is a simple technique I just came up with: just before replacing the current process by calling exec(), you fork() a process in the background that will wait for the current process id to disappear from the process list, and then does whatever you want to do. A simple proof-of-concept I wrote is composed of two bash programs: wrapper and real. real is really simple: it just waits a few seconds and then prints its process id to the console:
#!/bin/bash
sleep 5
echo $BASHPID
wrapper is the program that handles the situation we want to exercise: it replaces itself with real, but still has the chance to do something after real finishes. In this case, wrapper notifies the user that real finished.
#!/bin/bash
echo $BASHPID
real_program_pid=$BASHPID
(
  while ps -p "$real_program_pid" >/dev/null; do
    sleep 0.1s
  done
  notify-send 'real program finished'
) &
exec ./real
One nice property that wrapper exploits is that when exec() starts real, it really replaces wrapper, and therefore has the same process id (in this case accessible in bash through the $BASHPID variable). Because of this, the background process that wrapper starts just before the exec() call already knows which process it has to watch for. The actual code for waiting is not optimal, though. I cannot use waitpid() (the wait builtin in bash), since real is not a child process of wrapper. I went with a brute-force approach here, and I am pretty sure there is a cheaper way to wait for a random PID without a busy loop (but that wasn't the point here). 1 update: I am aware of the classic fork()/exec() pattern. My Very Good Reasons include the fact that I can't control the flow: I am writing a plugin for a program that calls its plugins in sequence, and after that, calls exec(), but my plugin is interested in doing some work after the exec()ed program finishes.
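On the busy loop: GNU coreutils' tail has a --pid option that blocks until an arbitrary process exits, which moves the polling out of the shell code (tail still polls internally, but the script gets simpler). A self-contained sketch of the watcher subshell using it (sleep stands in for the exec()ed program, and echo replaces notify-send so the example runs anywhere):

```shell
#!/bin/bash
# Stand-in for the exec()ed "real" program; in the original wrapper,
# this PID would be $BASHPID captured just before the exec call.
sleep 2 &
watched_pid=$!
(
  # Block until the watched PID exits, without a shell-level busy loop.
  tail --pid="$watched_pid" -f /dev/null
  echo 'real program finished'   # the post used notify-send here
) &
wait
```

Note that tail --pid works for any PID, not just children, so it fits the parent-watching situation in the post.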

12 September 2011

Antonio Terceiro: Laptop fun, or "WTF, HP?"

A couple of weeks ago my old laptop decided to rest forever, and I was forced to impose some economic pressure on the consumption of the planet's resources by acquiring a new one. Fortunately this happened while I am still here in Canada, where it is considerably cheaper to get a decent laptop than it is in Brazil. Although I did not have the budget I wanted for buying a really kick-ass laptop, I was able to buy a decent one, an HP Pavilion G6 1B74CA. I went to 4 different shops with a USB stick loaded with Debian Live to check whether all the hardware would work OK. I always asked one of the salesmen before rebooting the laptops, but it was funny to see the reactions of the different employees who came by to check what I was doing: some of them barely noticed that GNOME was not Windows XYZ, and some asked whether I was hacking the laptops. Unfortunately, I wasn't able to find a single laptop in which the wireless worked out of the box with the Squeeze kernel, which sucks. I searched over the internet a lot, and it seems that even the vendors recommended by the FSF do not provide laptops with wireless cards that work without non-free blobs. Another issue I had was with the Intel graphics: after the kernel enables modesetting, the backlight goes to the minimum and it looks like you have no video. There are a couple of workarounds on the internet, and the one in which you add acpi_osi=Linux acpi_backlight=vendor to the kernel parameters makes the laptop turn on by itself in the morning. This is probably caused by broken ACPI handling in the BIOS, which is almost always written by people on crack. The other issues I had were related to the keyboard. First, the BIOS came by default with Access keys mode enabled, which means that by default pressing F2-F12 actually activated the multimedia keys instead of the real function keys. It was disappointing to hit F12 and have my wireless turned off. After disabling this in the BIOS setup, it was OK.
Well, not quite: for some reason, Fn+F4 did not generate the expected keycodes. After some research on the internet, I found an interesting Ubuntu bug. It seems that the latest and greatest version of Windows starts its video setup application via the Meta+P shortcut (Meta being the actual name of what Windows people call the "Windows key"). Guess what the morons writing the HP BIOS did: yes, they made the keys Fn+F4 (where F4 has video setup as its multimedia key) generate the keycodes for Meta+P! It would be a lot easier for everyone in the world if the people at Microsoft just made their stuff listen to both Meta+P and the video setup keycode, which is generated by every laptop out there, to activate the Windows screen setup thing. This way the very smart dudes writing the BIOS at HP wouldn't need to make Fn+F4 generate the keycodes for Meta+P and break every single desktop that is not produced in Redmond.

14 August 2011

Antonio Terceiro: Handling upstream patches with git-export-debian-patches

These days I briefly discussed with a fellow Debian developer how to maintain upstream patches in Debian packages with Git, which brought me to rethink my current practices a little. What I usually do is pretty much like point 4 in Raphael's post "4 tips to maintain a 3.0 (quilt) Debian source package in a VCS": I make commits in the Debian packaging branch, or in a separate branch that is merged into the Debian packaging branch. Then I add the single-debian-patch option to debian/source/options so that a single Debian patch is generated, and include a patch header that points people interested in the individual changes to the public Git repository where they were originally made. My reasoning for doing so was the following: most upstream developers will hardly care enough to come check the patches applied against their source in Debian, so it's not so important to have a clean source package with separated and explained patches. But then there are the people who will actually care about the patches: other distributions' developers. Not imposing a specific VCS on them to review the patches applied in Debian is a nice thing to do. So I wrote a script called git-export-debian-patches (download, manpage), which was partly inspired by David Bremner's script. It exports to debian/patches all commits in the Debian packaging branch that do not touch files under debian/ and were not applied upstream. The script also creates an appropriate debian/patches/series file. The script is even smart enough to detect patches that were later reverted in the Debian branch and exclude them (and the commits that reverted them) from the patch list. The advantage I see over gbp-pq is that I don't need to rebase (and thus lose history) to have a clean set of patches.
The advantage over the gitpkg quilt-patches-deb-export-hook hook is that I don't need to explicitly say which ranges I want: every change that is merged into master, was not applied upstream, and was not reverted gets listed as a patch. To be honest, I don't have any experience with either gbp-pq or gitpkg, and these advantages are based on what I have read, so please leave a (nice ;-)) comment if I said something stupid. I am looking forward to receiving feedback about the tool, especially about potential corner cases in which it would break. For now I have tested it on a package with simple changes against the upstream source, and it seems fine.
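The core idea of the script can be sketched in a few lines of plain git. This is a simplification, not the actual tool: the branch names upstream and master are assumptions, and the revert-detection logic described above is omitted:

```shell
#!/bin/sh -e
# Export commits on the packaging branch that touch anything outside
# debian/ and are not on the upstream branch, one patch per commit,
# then regenerate the series file from the exported patches.
mkdir -p debian/patches
git format-patch --output-directory debian/patches \
    upstream..master -- . ':(exclude)debian'
ls debian/patches/*.patch | xargs -n1 basename > debian/patches/series
```

The ':(exclude)debian' pathspec is what skips packaging-only commits, so only genuine changes against the upstream source end up in debian/patches.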

7 August 2011

Antonio Terceiro: Playing with JPEG quality and file size

This is probably no news at all for graphics/image processing experts, but it's something I've just learnt myself and I thought it would be fun to share. I am writing a static HTML photo album generator and was a little suspicious of the size of the generated JPEG images. I thought "well, these JPEGs should not be that large ..." I did some quick research and found out that ImageMagick uses JPEG quality 92 by default, and was curious about how file size would vary as I changed the output quality. So I took an image and produced thumbnails of it with the "JPEG quality" parameter ranging from 1 to 100, to check 1) how the file size varies with quality and 2) how much the quality actually makes any difference when viewing the images. To generate the thumbnails with varying quality, I did the following:
$ for i in $(seq -f %03g 1 100); do convert -scale 640x480 -quality $i /path/to/original.jpg $i.jpg; echo $i; done
Then I generated a data file by calculating the size of each file with du and piping the results through sed and awk:
$ du -b [0-9]*.jpg | sed 's/.jpg//' | awk '{ print $2 " " $1 }'
The generated data file looks like this, with the JPEG quality in the first column and the file size in bytes in the second column:
001 20380
002 20383
003 20634
004 21106
[...]
Regarding file size, it seems that between 1 and 50, file size grows sublinearly with quality. Beyond that, the curve reaches an inflection point and grows in a way that looks, if not exponential, at least polynomial.

[Plot: JPEG file size by quality]

The above plot was produced in an R session that looked like this:
$ R
R version 2.13.1 (2011-07-08)
Copyright (C) 2011 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: i486-pc-linux-gnu (32-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> png()
> data <- read.table('points.dat')
> quality <- data[[1]]
> quality
  [1]   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18
 [19]  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36
 [37]  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53  54
 [55]  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71  72
 [73]  73  74  75  76  77  78  79  80  81  82  83  84  85  86  87  88  89  90
 [91]  91  92  93  94  95  96  97  98  99 100
> filesize <- data[[2]]
> filesize
  [1]  20380  20383  20634  21106  21551  22012  22469  22878  23323  23715
 [11]  24103  24494  24952  25327  25725  26127  26507  26886  27216  27550
 [21]  27917  28288  28627  28945  29271  29583  29919  30280  30516  30813
 [31]  31099  31367  31679  31873  32232  32538  32704  33072  33324  33443
 [41]  33860  34055  34253  34633  34804  35074  35216  35491  35871  35935
 [51]  36030  36443  36743  36898  37120  37382  37726  38077  38307  38581
 [61]  39002  39270  39700  39962  40388  40762  41086  41629  42062  42544
 [71]  43048  43392  44062  44824  45023  45682  46532  47347  47833  48701
 [81]  49612  50423  51694  52637  53635  55243  56340  58304  59709  62162
 [91]  64207  66273  70073  74617  79917  86745  94950 105680 128158 145937
> plot(quality, filesize, xlab = 'JPEG Quality', ylab = 'File size')
> 
Save workspace image? [y/n/c]: y
Looking at the actual generated thumbnails, somewhere above quality 60 I stopped noticing the difference between increasing quality factors. Settling on a default quality of 75 seems to be good enough: the resulting static HTML album generated from a folder with 82 pictures dropped from 12MB with the default ImageMagick quality factor to 6MB with quality 75, with very little perceivable loss of image quality.

23 July 2011

Antonio Terceiro: Hello, Planet Debian

Hello, world! So, now this blog is being syndicated on Planet Debian, and I decided to write something to introduce myself. I was born in Salvador, Bahia, Brazil. I am married to Josy, a wonderful person; we do not have any kids yet, but anyone can notice us both discreetly drooling when close to small children. I have a passion for building things, and making things work. Programming happens to be the best building toy anyone has ever invented. I became an official Debian Developer a little more than one week ago, but I have been working on the project for some time already. In the very beginning I was involved with the Perl group, but now almost all my work is in the Ruby team. I am a PhD student at the Federal University of Bahia under the supervision of professor Christina Chavez. My research involves three topics I am just fascinated by: Free ("and Open Source") Software, Software Design and Empirical Software Engineering. I am investigating whether it is possible to explain the variation in Structural Complexity by analyzing developer attributes. I want to answer questions such as "do developers with more experience in a project produce more complex code?", or "do developers who focus on a single part of the project produce more complex code than developers who work on the entire project?". As part of my PhD research I have been working on analizo, a multi-language source code analysis and visualization toolkit. I plan to write specifically about it at a later opportunity. My other job is running my own company, Colivre, together with several other great people. Colivre is a cooperative, which basically means that everyone who works there is also an owner of the company. We provide web solutions, especially social networking environments, social media websites and the like. Working on (and with) free software is our premise at Colivre, and most of our work involves either Noosfero or Foswiki.
Right now I am taking some time off from Colivre to be able to concentrate on (and finish!) my PhD. Until late September I am living in beautiful Vancouver, BC, Canada, where I am currently a visiting researcher at the Software Practices Lab at UBC, under the supervision of professor Gail Murphy. So that's it. I hope to post interesting stuff here and to get your feedback whenever you find it worthwhile. I also maintain a microblog at identi.ca, where you'll find shorter (and more frequent) updates.

10 May 2011

Niels Thykier: Lintian 2.5.0 Overrides and other changes

In the new version of Lintian there have been some more changes to overrides, as well as a couple of other changes. First off, we fixed a false positive with the embedded-library tag. Lintian would incorrectly use the Source field of a binary package when figuring out if a package was the real library and not an offender embedding the library. However, this field is allowed to contain the version of the source package (if you remember Policy 5.6.1) and Lintian did not correctly cope with that. Speaking of embedded libraries, we have accepted a (series of) patch(es) from Marcelo Jorge Vieira to detect a number of embedded versions of the jQuery javascript library (libjs-jquery-*). Antonio Terceiro also gave us a couple of patches to make Lintian more accurate with the recent changes for Ruby packages. Furthermore, we also added a new experimental tag to catch duplicate files in /usr/share/doc. Lintian 2.5.0 also modifies the syntax and semantics of the overrides file. In 2.5.0 and newer, all * after the tag name are wildcards; previously they only acted as wildcards at the beginning or the end of the text after the tag. The Multi-Arch-aware reader might have noticed that this is not enough for packages marked Multi-Arch: same. Some packages might emit different tags on different architectures, and all files in the package (incl. the override file) must be byte-for-byte identical if the path is the same on all architectures. Andreas Beckmann proposed a solution that we have accepted, namely architecture-dependent overrides. With 2.5.0 and newer you can specify that a certain override only applies to certain architectures. The parser is currently somewhat naive and forgiving, so it does not support architecture wildcards and it will not check that the architectures are valid. Now you get to do overrides like this:
# We like our code without pic on x86, thank you
[i386]: shlib-with-non-pic-code
If you had not guessed it, we use the same format as is used in the Build-Depends field (except for the lack of wildcard support). So you should be familiar with it. :) Note that the syntax of the overrides should be backwards compatible, so unlike the 2.5.0~rc1 upload, your overrides should still work! A little heads up for people going to DebConf11: We will do a Lintian BoF again this year, yay!

16 August 2007

Alexis Sukrieh: Perl Console 0.2 Debian package

The first version of the Debian package of Perl Console has been uploaded to the NEW queue. For those who are waiting for it, I've also uploaded the package here. Thanks to the patch sent by Antonio Terceiro, version 0.3 will be properly packaged à la Perl (namely with the famous Makefile.PL, MANIFEST and friends). I plan to address the multi-line issue for 0.3 (mainly handling code with loops or conditional structures); as Florian Ragwitz underlined, it could be worth using Devel::REPL instead of reinventing the wheel.
